Use projected token volume for hostNetwork pods. #428
Conversation
```diff
@@ -34,6 +34,7 @@ import (
 )

 var (
+	hostNetwork = flag.Bool("host-network", false, "pod hostNetwork configuration")
```
If this goes inside mounter.Mount, do we need this flag? I think we pass (or can pass) hostNetwork flag through mountConfig.
This can simplify the sidecar injection logic too.
We still need this flag to pass to the mount config.
You are using the new HostNetwork flag to:

- set `mc.HostNetwork`
- set up `gcs-auth:token-url:...`

To avoid creating flags, we can determine whether the pod supports this feature in the gcs-fuse-csi-driver container by checking that hostNetwork is enabled on the pod and that the volume we need is injected. We have a pod informer that has that information.

gcs-fuse-csi-driver/pkg/csi_driver/node.go (line 137 in e5d8871):

```go
pod, err := s.k8sClients.GetPod(vc[VolumeContextKeyPodNamespace], vc[VolumeContextKeyPodName])
```
We can then pass the mountConfig from gcs-fuse-csi-driver -> gke-gcsfuse-sidecar.

Driver sending `mc` to the sidecar: `mc := sidecarmounter.MountConfig{`

Sidecar receiving `mc` from the driver: `if err := json.Unmarshal(msg, &mc); err != nil {`

After that, we can process `mc` to set the configmap with the uds path: `func (mc *MountConfig) prepareMountArgs() {`
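For illustration, the check suggested here could be sketched roughly as below. The types and the volume name `gcsfuse-sa-token` are hypothetical stand-ins; the real code would read a `corev1.Pod` obtained from the pod informer:

```go
package main

import "fmt"

// Minimal stand-ins for the corev1.Pod fields this check would read.
type Volume struct {
	Name      string
	Projected bool // true if the volume has a Projected source
}

type Pod struct {
	HostNetwork bool
	Volumes     []Volume
}

// hostNetworkTokenVolumeName is a hypothetical name for the projected
// SA token volume the webhook would inject.
const hostNetworkTokenVolumeName = "gcsfuse-sa-token"

// needsHostNetworkTokenFlow reports whether the pod runs with
// hostNetwork and carries the injected projected token volume,
// i.e. the feature could be detected without a dedicated flag.
func needsHostNetworkTokenFlow(pod *Pod) bool {
	if !pod.HostNetwork {
		return false
	}
	for _, v := range pod.Volumes {
		if v.Name == hostNetworkTokenVolumeName && v.Projected {
			return true
		}
	}
	return false
}

func main() {
	with := &Pod{HostNetwork: true, Volumes: []Volume{{Name: "gcsfuse-sa-token", Projected: true}}}
	without := &Pod{HostNetwork: false}
	fmt.Println(needsHostNetworkTokenFlow(with), needsHostNetworkTokenFlow(without)) // true false
}
```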
Hey Jaime, thanks for the suggestion. Somehow the pod returned by k8sclients.GetPod(...) does not persist the HostNetwork value: it is false despite my test pod setting it to true. It might be worthwhile to derive HostNetwork from pod.Annotations["kubectl.kubernetes.io/last-applied-configuration"], but it could get very messy.
This is the last-applied annotation.
```json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "annotations": { "gke-gcsfuse/volumes": "true" },
    "name": "test-pv-pod-hnw",
    "namespace": "ns1"
  },
  "spec": {
    "containers": [
      {
        "command": ["sleep", "3600"],
        "image": "busybox",
        "name": "busybox",
        "volumeMounts": [
          { "mountPath": "/data", "name": "gcp-cloud-storage-pvc" },
          { "mountPath": "/dataEph", "name": "gcs-fuse-csi-ephemeral" }
        ]
      }
    ],
    "hostNetwork": true,
    "serviceAccountName": "test-ksa-ns1",
    "volumes": [
      {
        "name": "gcp-cloud-storage-pvc",
        "persistentVolumeClaim": { "claimName": "gcp-cloud-storage-csi-static-pvc" }
      },
      {
        "csi": {
          "driver": "gcsfuse.csi.storage.gke.io",
          "volumeAttributes": { "bucketName": "test-wi-host-network-2", "mountOptions": "implicit-dirs" }
        },
        "name": "gcs-fuse-csi-ephemeral"
      }
    ]
  }
}
```
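As a side note on the annotation route: if one did parse the last-applied configuration, decoding only `spec.hostNetwork` would keep it contained. A sketch with a hypothetical helper name, not code from this PR:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// hostNetworkFromLastApplied extracts spec.hostNetwork from the value of
// the kubectl.kubernetes.io/last-applied-configuration annotation.
// Only the one field we need is decoded; an absent field means false.
func hostNetworkFromLastApplied(raw string) (bool, error) {
	var cfg struct {
		Spec struct {
			HostNetwork bool `json:"hostNetwork"`
		} `json:"spec"`
	}
	if err := json.Unmarshal([]byte(raw), &cfg); err != nil {
		return false, err
	}
	return cfg.Spec.HostNetwork, nil
}

func main() {
	annotation := `{"apiVersion":"v1","kind":"Pod","spec":{"hostNetwork":true,"containers":[]}}`
	hn, err := hostNetworkFromLastApplied(annotation)
	fmt.Println(hn, err) // true <nil>
}
```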
So far the cleanest and safest solution is passing the bool flag from the webhook. What do you think? Let me know.
Synced with Jaime offline. PodSpec was filtered in this PR: #413
This approach proved infeasible even after adding hostNetwork back to the podSpec. In the CSI driver, the msg is sent to the listener on the CSI path, which looks like "/var/lib/kubelet/pods/c9a5236b-cf41-49c0-bcfe-f61988440b8f/volumes/kubernetes.io~csi/gcs-fuse-csi-ephemeral/mount". The msg in the sidecar mounter is received on the volume socket path: "connecting to socket "/gcsfuse-tmp/.volumes/gcs-fuse-csi-ephemeral/socket"". They are not the same msg.
Will stick to the original design.
@siyanshen Do we know how all the other options make their way from the node server to the sidecar? Regarding the different paths being used, I believe we use a symbolic link for socket communication. Could you clarify this, so that going forward we can follow a consistent approach?
gcs-fuse-csi-driver/pkg/csi_mounter/csi_mounter.go (lines 211 to 215 in eeeb147):

```go
// Create socket base path.
// Need to create symbolic link of emptyDirBasePath to socketBasePath,
// because the socket absolute path is longer than 104 characters,
// which will cause "bind: invalid argument" errors.
socketBasePath := util.GetSocketBasePath(target, m.fuseSocketDir)
```
/LGTM
lgtm
Why we need this change

Before this change, the GCSFuse CSI driver does not support pods with `hostNetwork: true`. This is because the gcsfuse process runs in a sidecar container injected into the user pod, and GCSFuse uses the ADC workflow to fetch the token needed to access a GCS bucket. With hostNetwork enabled, however, the GKE metadata server cannot intercept token requests to the `GET /computeMetadata/v1/instance/service-accounts/default/token` endpoint.

What is in this change

- For `hostNetwork: true` user pods, the GCSFuse CSI webhook injects a projected service account token volume into the user pod.
- The `gke-gcsfuse-sidecar` container prepares a unix domain socket, starts a handler to serve token requests on it, and invokes gcsfuse with a config option pointing to the socket: `gcs-auth:token-url:<path to the token>`.
- The `gke-gcsfuse-sidecar` then handles token requests from GCSFuse.

Local setup and testing
Test cases covered:
- Setup: a pod with `hostNetwork: true`, a GCS bucket mounted as a volume, and the service account set to the one you created in step 2. Check I/O to your bucket from a container in your pod.